Authors: Mauro Venticinque | Angelo Schillaci | Daniele Tambone

GitHub project: Bank-Marketing

Date: 2025-05-23

Abstract

1 Introduction

In this project, we analyze data from a Portuguese banking institution’s direct marketing campaigns to identify key factors influencing customer subscription to term deposits.

A deposit account is a bank account maintained by a financial institution into which a customer can deposit and from which they can withdraw money. Deposit accounts include savings accounts, current accounts, and several other account types.

The dataset includes client demographics, previous campaign interactions, and economic indicators. Our goal is to develop insights that will enhance the effectiveness of future marketing strategies. By applying supervised learning techniques, we aim to predict customer responses and optimize outreach efforts for better engagement and conversion rates.

The report will begin with an Exploratory Data Analysis, examining the variables and their relationship with the target attribute (subscribed) to identify the most influential factors.

2 Exploratory Data Analysis

2.1 Variable descriptions

Bank client data:

  1. age (Integer): age of the customer
  2. job (Categorical): occupation
  3. marital (Categorical): marital status
  4. education (Categorical): education level
  5. default (Binary): has credit in default?
  6. housing (Binary): has housing loan?
  7. loan (Binary): has personal loan?
  8. contact (Categorical): contact communication type
  9. month (Categorical): last contact month of year
  10. day_of_week (Categorical): last contact day of the week
  11. duration (Integer): last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y=‘no’). Yet, the duration is not known before a call is performed. Also, after the end of the call y is obviously known. Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model

Other attributes:

  1. campaign (Integer): number of contacts performed during this campaign and for this client (numeric, includes last contact)
  2. pdays (Integer): number of days that passed after the client was last contacted during a previous campaign (numeric; 999 means the client was not previously contacted)
  3. previous (Integer): number of contacts performed before this campaign and for this client
  4. poutcome (Categorical): outcome of the previous marketing campaign (categorical: ‘failure’,‘nonexistent’,‘success’)

Social and economic context attributes:

  1. emp.var.rate (Numeric): employment variation rate - quarterly indicator
  2. cons.price.idx (Numeric): consumer price index - monthly indicator
  3. cons.conf.idx (Numeric): consumer confidence index - monthly indicator
  4. euribor3m (Numeric): Euribor 3-month rate - daily indicator
  5. nr.employed (Numeric): number of employees - quarterly indicator

Output variable (desired target):

  1. subscribed (Binary): has the client subscribed to a term deposit?

Source: UCI Machine Learning Repository

Note: Our dataset does not include the bank balance variable.

More details

Data summary
Name train
Number of rows 32950
Number of columns 21
_______________________
Column type frequency:
character 11
numeric 10
________________________
Group variables None

Variable type: character

skim_variable n_missing complete_rate min max empty n_unique whitespace
job 0 1 6 13 0 12 0
marital 0 1 6 8 0 4 0
education 0 1 7 19 0 8 0
default 0 1 2 7 0 3 0
housing 0 1 2 7 0 3 0
loan 0 1 2 7 0 3 0
contact 0 1 8 9 0 2 0
month 0 1 3 3 0 10 0
day_of_week 0 1 3 3 0 5 0
poutcome 0 1 7 11 0 3 0
subscribed 0 1 2 3 0 2 0

Variable type: numeric

skim_variable n_missing complete_rate mean sd p0 p25 p50 p75 p100 hist
age 0 1 40.04 10.45 17.00 32.00 38.00 47.00 98.00 ▅▇▃▁▁
duration 0 1 258.66 260.83 0.00 102.00 180.00 318.00 4918.00 ▇▁▁▁▁
campaign 0 1 2.57 2.77 1.00 1.00 2.00 3.00 43.00 ▇▁▁▁▁
pdays 0 1 961.90 188.33 0.00 999.00 999.00 999.00 999.00 ▁▁▁▁▇
previous 0 1 0.17 0.49 0.00 0.00 0.00 0.00 7.00 ▇▁▁▁▁
emp.var.rate 0 1 0.08 1.57 -3.40 -1.80 1.10 1.40 1.40 ▁▃▁▁▇
cons.price.idx 0 1 93.57 0.58 92.20 93.08 93.75 93.99 94.77 ▁▆▃▇▂
cons.conf.idx 0 1 -40.49 4.63 -50.80 -42.70 -41.80 -36.40 -26.90 ▅▇▁▇▁
euribor3m 0 1 3.62 1.74 0.63 1.34 4.86 4.96 5.04 ▅▁▁▁▇
nr.employed 0 1 5167.01 72.31 4963.60 5099.10 5191.00 5228.10 5228.10 ▁▁▃▁▇

The dataset includes 21 variables and 32,950 rows, with no missing values.
Categorical variables like job and education show good diversity, while default, loan, and housing have only 3 unique values.

Among numeric variables, age has a fairly normal distribution (mean ≈ 40, sd ≈ 10), while duration and pdays are highly skewed, with extreme values up to 4918 and 999 respectively.
Some variables (e.g., campaign, previous) have a low median but long tails, indicating that most observations are clustered at low values.
Macroeconomic variables such as emp.var.rate, euribor3m, and nr.employed are more stable, with tight interquartile ranges, suggesting consistent economic conditions during data collection.

2.2 Analysis of distributions

First, we observe that the dataset is unbalanced: the majority of clients have not subscribed.

Correlation and Pairwise Relationships

Correlation Matrix
The correlation matrix reveals clear patterns among the numerical variables. Notably, euribor3m, nr.employed, and emp.var.rate are strongly positively correlated with each other, suggesting that these variables capture similar information about the economic environment. This should be taken into account in predictive modeling, as using them together could lead to multicollinearity. In contrast, variables like campaign, pdays, and previous show very weak correlations with most other features, indicating they may contribute more independently to the model.

Scatterplot Matrix by Target
Several variables, such as duration and pdays, show highly skewed distributions, which could influence model performance and may benefit from transformations (e.g., log or binning). While some variables exhibit linear trends (e.g., euribor3m vs nr.employed), many scatterplots show dispersed or nonlinear patterns. This suggests that simple linear models may not fully capture the complexity in the data.

In certain plots, the blue points (subscribed) are concentrated in specific areas, showing the key factors that influenced successful subscriptions.

Distribution of Subscribed across Different Variables

Box plot of age
Older clients appear less likely to say no.

Box plot of emp.var.rate
Subscribers are concentrated at negative values of the employment variation rate.

Box plot of euribor3m
Subscribers are concentrated at lower values of the three-month Euribor.

Client data

Distribution of Age
The age distribution is right-skewed, with a peak around 30-40 years. The proportion of subscribers is higher among those over 60, possibly due to greater financial stability in older age groups.

Distribution of Job
The distribution of occupations is not uniform: administrative roles are the most common. The subscription rate among administrative workers is among the highest of all occupations, possibly because they tend to have higher incomes. Students and retired people also show a higher proportion of subscriptions, which is consistent with the earlier plots showing that both younger and older clients, and those with higher education levels, are more likely to subscribe.

Distribution of Education
Regarding education level, the distribution is not uniform: a university degree is the most common level. Clients with a university degree also have one of the highest subscription rates of all education levels, possibly because they tend to have higher incomes.

Distribution of Marital status
Single clients show a somewhat higher subscription rate than married or divorced clients.

Distribution of Contact
Clients contacted via cellular subscribe more often than those contacted by telephone.

Previous Campaign Data

Distribution of Contacts
Regarding the previous campaign, most clients were not previously contacted; however, the success rate is visibly higher among those who were contacted more than once before or who had a successful prior outcome. This suggests that prior engagement is positively associated with subscription, though such clients represent only a small share of the sample.

Temporal data

Distribution of Days of Week
The distribution of the last contact day of the week is roughly uniform, with a slight peak on Thursday. The proportion of subscribers is highest when the last contact occurred in the middle of the week.

Distribution of Months
By contrast, the distribution of the last contact month is not uniform: May accounts for the majority of contacts. The subscription rate is highest when the last contact occurred in March, December, September, or October. This may be because people are more inclined to subscribe when they have more disposable income, and less so during the summer.

Distribution of Duration
The duration of the last contact is right-skewed, with a peak around 0-100 seconds. The proportion of subscribers is higher among clients contacted for a longer duration, likely because longer calls reflect greater interest in subscribing.

Social and economic data

Distribution of Employment Variation
The distribution of the employment variation rate is not uniform: most observations have a positive or zero rate. The subscription rate is highest when the employment variation rate is negative, possibly because people have a greater propensity to save during recessions.

Distribution of Consumer Price Index
The proportion of subscribers is higher when the CPI is below 93. A lower CPI may leave people with more disposable income, making them more likely to subscribe.

Distribution of Consumer Confidence Index
The proportion of subscribers is higher when the consumer confidence index is above -40, possibly because people with greater confidence in the economy have a higher propensity to subscribe.

Distribution of Euribor 3 month rate
When considering the Euribor rate, one might expect a lower Euribor to reduce the savings rate, since most European banks align their deposit interest rate offers with ECB indexes, particularly the three-month Euribor. However, the plot shows the opposite: a lower Euribor corresponds to a higher probability of deposit subscription, and this probability decreases as the three-month Euribor rises.

2.3 Conclusion

The Exploratory Data Analysis reveals several important insights into the factors that influence the likelihood of subscription in this dataset. Below there is a summary of the key findings:

  • The dataset is unbalanced, with the majority of contacted individuals not subscribing.
  • Both younger and older individuals exhibit a higher likelihood of subscribing compared to those in middle age.
  • Socio-demographic factors, such as education and jobs, appear to influence subscription rates, for example, individuals in administrative roles and those with higher education levels tend to subscribe more often.
  • Prior interaction with the campaign, especially repeated contacts or past successful outcomes, is positively associated with subscription.
  • Subscription rates vary by month, with peaks in March, December, September, and October. Additionally, longer call durations are linked to a higher likelihood of subscription.
  • All economic variables examined show significant associations with subscription. Specifically, lower CPI, a negative employment variation rate, and higher CCI are correlated with increased subscription rates.

In summary, the analysis suggests that financial conditions, previous campaign interactions, and macroeconomic indicators are strong predictors of subscription behavior. Demographic factors such as age, occupation, and education level also contribute meaningfully to the outcome.

In the next section, we will use these EDA findings to perform a preliminary screening of the most influential variables, based on the visual trends observed in the plots.

3 Model selection

In this section, we explore different classification models to predict whether a client will subscribe to a term deposit.

Before training the models, we applied a transformation algorithm to convert categorical variables into numerical format. This is a crucial step in the data preprocessing phase, as many machine learning algorithms require numerical input. We used one-hot encoding to make categorical variables compatible with the classification models. This method represents each category as a binary variable, avoiding the introduction of arbitrary numerical orderings among categories. In this way, we ensure a correct statistical interpretation of qualitative variables and improve the effectiveness of the model training process.
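As a minimal illustration of this step, the following base-R sketch expands a categorical column into binary indicator columns using model.matrix (the data frame and column names here are made up, not the project's actual preprocessing code):

```r
# Illustrative data frame; column names are hypothetical
bank <- data.frame(
  age = c(30, 45, 52),
  job = factor(c("admin.", "student", "retired"))
)

# model.matrix expands the factor into one 0/1 indicator column per level;
# "- 1" drops the intercept so every level gets its own column
X <- model.matrix(~ job - 1, data = bank)
X
```

Each row of X contains exactly one 1, marking the client's job category, so no arbitrary numerical ordering is introduced among categories.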

However, among the classification models considered, LDA (Linear Discriminant Analysis) and QDA (Quadratic Discriminant Analysis) are not suitable in our case due to the nature of the predictor variables. In particular, most of the independent variables are binary and do not satisfy the fundamental assumption of normal distribution within each class, which is required by both methods. Furthermore, both models rely on covariance structures, whose interpretation becomes limited when applied to dichotomous variables.

For these reasons, we decided not to pursue further analysis with LDA and QDA and instead focused on models that are better aligned with the structure of the data, such as Logistic Regression and Random Forest.

3.1 Preprocessing

Based on the Exploratory Data Analysis (EDA), we selected only the most relevant variables.

With a view to training the model, we apply one-hot encoding. We obtain the following dataset:

21 Variables
Variable Type
age int
single bool
cellular bool
low_call bool
previous int
negative_emp bool
low_cpi bool
high_cci bool
low_euribor bool
university bool
p_course bool
job_student bool
job_retired bool
job_admin bool
month_sep bool
month_oct bool
month_dec bool
month_mar bool
p_failure bool
p_success bool
target bool

3.2 Logistic Regression

Logistic regression is a widely used statistical model for binary classification tasks. It is based on the sigmoid (logistic) function, which maps any real-valued input into the interval (0, 1), making the output interpretable as a probability. Specifically, the probability that an observation belongs to class 1 is given by:

\(P(Y=1|X=x)=p(x)=\frac{e^{\beta_0+\beta_1x_1+\dots + \beta_n x_n}}{1+e^{\beta_0+\beta_1x_1+\dots + \beta_n x_n}}\)

To perform classification, a decision threshold is applied: if \(p(x)\) exceeds the threshold (commonly 0.5), the observation is assigned to class 1; otherwise, it is assigned to class 0.
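A small numerical sketch of this rule, with arbitrary coefficients chosen purely for illustration:

```r
# Logistic function: maps the linear predictor to a probability in (0, 1)
p <- function(x, beta0, beta1) 1 / (1 + exp(-(beta0 + beta1 * x)))

# Illustrative coefficients and inputs
probs <- p(c(-2, 0, 3), beta0 = 0.5, beta1 = 1.2)

# Apply the 0.5 decision threshold
pred_class <- ifelse(probs > 0.5, 1L, 0L)
pred_class  # 0 1 1
```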

Before training the model on the training set, we applied two variable selection methods to the entire dataset in order to reduce the large number of predictors. The selection criteria were:

  • Stepwise selection in both directions, that optimizes the AIC at each step.

  • LASSO, which, as supported by the literature, often outperforms stepwise methods.

After that we checked for VIF, but for both models there wasn’t any multicollinearity.
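A compact sketch of the stepwise step on synthetic data (the real model uses the bank predictors; MASS::stepAIC is one common implementation and is assumed here):

```r
library(MASS)  # stepAIC

set.seed(1)
n <- 300
train <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
# Only x1 and x2 truly matter; x3 is noise
train$target <- rbinom(n, 1, plogis(1.5 * train$x1 - 1 * train$x2))

full_fit <- glm(target ~ ., data = train, family = binomial)
# Stepwise selection in both directions, optimizing AIC at each step
step_fit <- stepAIC(full_fit, direction = "both", trace = FALSE)
names(coef(step_fit))  # the informative predictors are retained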

Once the relevant variables were selected using both methods, we employed k-fold cross-validation to obtain stable estimates of accuracy, misclassification error, and other useful metrics which will be used for comparison. We chose 10 folds, as it offers a good balance between computational efficiency, compared to LOOCV, and reliable model evaluation, as compared to 5-fold cross-validation.
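The cross-validation loop can be sketched in base R as follows (synthetic data; the project applies the same idea to the selected bank predictors):

```r
set.seed(42)
n <- 200
df <- data.frame(x = rnorm(n))
df$y <- rbinom(n, 1, plogis(2 * df$x))

k <- 10
folds <- sample(rep(1:k, length.out = n))  # random fold assignment

# For each fold: train on the other 9 folds, evaluate on the held-out fold
acc <- sapply(1:k, function(i) {
  fit  <- glm(y ~ x, data = df[folds != i, ], family = binomial)
  prob <- predict(fit, newdata = df[folds == i, ], type = "response")
  mean((prob > 0.5) == df$y[folds == i])
})
mean(acc)  # cross-validated accuracy estimate
```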

The variables we got from stepwise and LASSO are the following ones:

Model Variable Comparison
Full_dataset Stepwise_Model Lasso_Model
age
single x
cellular x x
low_call x
previous x
negative_emp x x
low_cpi x x
high_cci x x
low_euribor x x
university x x
p_course x
job_student x x
job_retired x x
job_admin x
month_sep x x
month_oct x x
month_dec x x
month_mar x x
p_failure x x
p_success x x
total 20 14

As we can see, LASSO shrank the number of variables more than stepwise selection, yielding a simpler model. Let us now compare the two models at different thresholds. The thresholds are chosen to maximize:

  • Accuracy, that takes into account just the general accuracy of the model.

  • F1 which is a metric useful in unbalanced dataset like ours, since it considers both precision (how many predicted positives are actual positives) and recall (how many actual positives are detected).

We also computed sensitivity and specificity.

Model Performance at Different Thresholds
Model Threshold Accuracy F1 Sensitivity Specificity
Stepwise 0.5 0.8990 0.3182 0.2093 0.9865
Stepwise 0.2 0.8633 0.4833 0.5673 0.9009
LASSO 0.5 0.8990 0.3216 0.2126 0.9861
LASSO 0.2 0.8528 0.4676 0.5735 0.8883

The table shows that both models perform similarly in terms of accuracy and specificity. However, when the threshold is lowered from 0.5 to 0.2, both models achieve higher F1 scores and sensitivity, though at the expense of reduced accuracy and specificity.

Between the two models, the LASSO-based logistic regression is slightly preferable, as it achieves comparable or slightly better F1 scores while benefiting from variable selection and model simplicity.

Given the imbalance in the dataset, we prioritize a threshold that maximizes the F1 score rather than overall accuracy. This is because both false positives (wasting resources contacting uninterested customers) and false negatives (missing likely subscribers) are costly in this context. Therefore, a threshold that better balances precision and recall, reflected by a higher F1 score, is more appropriate for the bank’s decision-making.
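For reference, these metrics can be computed directly from the confusion-matrix counts; a minimal base-R sketch with illustrative vectors, not the model's actual predictions:

```r
# Accuracy, F1, sensitivity and specificity from 0/1 predictions
metrics <- function(pred, actual) {
  tp <- sum(pred == 1 & actual == 1)
  tn <- sum(pred == 0 & actual == 0)
  fp <- sum(pred == 1 & actual == 0)
  fn <- sum(pred == 0 & actual == 1)
  precision <- tp / (tp + fp)
  recall    <- tp / (tp + fn)  # same as sensitivity
  c(accuracy    = (tp + tn) / length(actual),
    f1          = 2 * precision * recall / (precision + recall),
    sensitivity = recall,
    specificity = tn / (tn + fp))
}

m <- metrics(pred = c(1, 0, 1, 0), actual = c(1, 0, 0, 0))
round(m, 3)  # accuracy 0.75, f1 0.667, sensitivity 1, specificity 0.667
```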

3.3 Decision tree

3.3.1 Random Forest

We trained a Random Forest classifier, setting mtry = n/3, where n is the number of variables, to control how many variables are randomly considered at each split. The model was trained on the full training set with 500 trees and feature importance enabled.

The model returns an out-of-bag (OOB) error rate of `r round(tail(rf_model$err.rate[, "OOB"], 1) * 100, 2)`%, a good level of error that indicates fairly robust generalization without overfitting.

The following graphs represent the importance of each variable in terms of mean decrease in accuracy and in Gini impurity.

Both measures highlight similar variables as important, suggesting consistent influence of features such as ‘age’, ‘p_success’, and ‘high_cci’. However, there are slight differences in ranking due to how each metric evaluates importance.
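The training step described above can be sketched as follows, using the randomForest package on a small two-class stand-in dataset (iris restricted to two species; the project's actual data objects differ):

```r
library(randomForest)

set.seed(123)
# Two-class stand-in for the bank data
d <- iris[iris$Species != "setosa", ]
d$Species <- factor(d$Species)  # drop the unused level

p <- ncol(d) - 1  # number of predictor variables
rf <- randomForest(Species ~ ., data = d,
                   mtry = max(1, floor(p / 3)),  # variables tried at each split
                   ntree = 500, importance = TRUE)

tail(rf$err.rate[, "OOB"], 1)  # out-of-bag error estimate
importance(rf)  # MeanDecreaseAccuracy and MeanDecreaseGini per variable
```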

3.3.2 Boosting

set.seed(123)

# Fit a gradient boosting model with 5,000 trees and depth-3 interactions
boost_train <- gbm(target ~ ., data = train_set2, distribution = "bernoulli",
                   n.trees = 5000, interaction.depth = 3)

# Relative influence of each predictor
tmp <- summary(boost_train, plotit = FALSE)
tmp$var <- factor(tmp$var, levels = tmp$var[order(tmp$rel.inf)])

ggplot(tmp, aes(x = var, y = rel.inf)) +
  geom_bar(stat = "identity", fill = "steelblue", alpha = 0.9) +
  coord_flip() +
  labs(title = "Relative Importance of Variables",
       x = NULL,
       y = "Relative Importance") +
  theme_minimal()

# Note: with distribution = "bernoulli", predict() returns values on the
# log-odds scale unless type = "response" is given, so the 0.5 cutoff below
# is applied to log-odds (roughly a probability of 0.62)
yhat.boost <- predict(boost_train, newdata = test_set2, n.trees = 5000)
table(pred = yhat.boost > 0.5, actual = test_set2$target)
##        actual
## pred       0    1
##   FALSE 5764  613
##   TRUE    83  129

4 Conclusion